Results 1 - 20 of 456

1.
AIP Conference Proceedings ; 2655, 2023.
Article in English | Scopus | ID: covidwho-20245510

ABSTRACT

The objective is to detect novel social distancing using Local Binary Pattern (LBP) in comparison with Principal Component Analysis (PCA). Social distance detection is performed using the Local Binary Pattern (N=20) and Principal Component Analysis (N=20) algorithms. The Google AI Open Images dataset, containing more than 10,000 images, is used for image detection. The accuracy of Principal Component Analysis is 89.8% and that of Local Binary Pattern is 93.9%. There is a statistically significant difference between LBP and PCA (p<0.05). Local Binary Pattern appears to perform significantly better than Principal Component Analysis for social distancing detection. © 2023 Author(s).
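
The core texture descriptor this abstract compares against PCA can be illustrated with a minimal sketch. The code below computes the classic 3x3 LBP code for one pixel neighbourhood; it is a generic illustration of the LBP operator, not the authors' implementation, and the clockwise bit ordering is an assumption (implementations vary).

```python
# Illustrative 3x3 Local Binary Pattern (LBP) code computation.
# Each neighbour is thresholded against the centre pixel and the
# resulting bits are packed clockwise into an 8-bit texture code.

def lbp_code(patch):
    """patch: 3x3 list of lists of grey values; returns a code in 0..255."""
    centre = patch[1][1]
    # Clockwise neighbour order starting at the top-left pixel (an assumed convention).
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= centre:  # neighbour at least as bright as the centre sets its bit
            code |= 1 << bit
    return code
```

A full detector would histogram these codes over image cells and feed the histograms to a classifier.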

2.
Proceedings of SPIE - The International Society for Optical Engineering ; 12592, 2023.
Article in English | Scopus | ID: covidwho-20245093

ABSTRACT

Owing to the impact of COVID-19, the venues for dancers to perform have shifted from the stage to the media. In this study, we focus on the creation of dance videos that allow audiences to feel a sense of excitement without disturbing their awareness of the dance subject and propose a video generation method that links the dance and the scene by utilizing a sound detection method and an object detection algorithm. The generated video was evaluated using the Semantic Differential method, and it was confirmed that the proposed method could transform the original video into an uplifting video without any sense of discomfort. © 2023 SPIE.

3.
2023 3rd International Conference on Advances in Electrical, Computing, Communication and Sustainable Technologies, ICAECT 2023 ; 2023.
Article in English | Scopus | ID: covidwho-20244302

ABSTRACT

Healthcare systems all over the world are strained as the spread of the COVID-19 pandemic becomes more widespread. With no viable medical therapies or vaccinations, the only realistic strategy to avoid asymptomatic transmission is to monitor social distance. A computer vision-based framework that uses deep learning is proposed to analyze the images needed to measure social distance. The technique uses a key-point regressor to identify the important feature points, employing VGG19, a standard multi-layer Convolutional Neural Network (CNN) architecture, and MobileNetV2, a computer vision network that advances the state of the art in mobile visual recognition, including semantic segmentation, classification, and object identification. VGG19 and MobileNetV2 were trained on a Kaggle dataset. Bounding boxes are drawn around detected people, and when the crowd is sizeable, faces flagged in red are analyzed by MobileNetV2 to determine whether the person is wearing a mask. The distance between the detected people is calculated using the Euclidean distance. Pretrained models such as YOLOv3 (You Only Look Once), a real-time object detection system, RCNN, and ResNet50 are used in the embedded vision environment to identify social distance in images. With transfer learning, YOLOv3 achieves an overall accuracy of 95% and runs in 22 ms, four times faster than the other predefined models. The proposed model achieves an accuracy of 96.67% using VGG19 and 98.38% using MobileNetV2, beating all other models at estimating social distance and detecting face masks. © 2023 IEEE.
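
The Euclidean-distance step in frameworks like this one reduces to comparing detection centroids against a threshold. The sketch below is a generic illustration under assumed conventions (boxes as (x1, y1, x2, y2) pixel tuples, a pixel-space threshold); it is not the paper's code.

```python
import math

# Flag pairs of detected people whose bounding-box centroids are closer
# than a minimum pixel distance (a real system would calibrate pixels
# to metres from the camera geometry).

def centroid(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def too_close_pairs(boxes, min_dist):
    pairs = []
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            (xa, ya), (xb, yb) = centroid(boxes[i]), centroid(boxes[j])
            if math.hypot(xa - xb, ya - yb) < min_dist:
                pairs.append((i, j))
    return pairs
```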

4.
Proceedings - 2022 2nd International Symposium on Artificial Intelligence and its Application on Media, ISAIAM 2022 ; : 43-47, 2022.
Article in English | Scopus | ID: covidwho-20243436

ABSTRACT

With the upgrading and innovation of the logistics industry, the requirements for smart transportation technologies continue to increase, and the outbreak of COVID-19 has further promoted the development of unmanned transportation machines. To meet the requirements of intelligent following and automatic obstacle avoidance for mobile robots in dynamic, complex environments, this paper uses machine vision to realize the visual perception function and studies real-time path planning for robots in complicated environments. It proposes a Dijkstra-ant colony optimization (ACO) fusion algorithm: the environment model is established with the link-visibility method, the Dijkstra algorithm plans the initial path, and the introduction of immune operators improves the ant colony algorithm, which optimizes the initial path. Finally, simulation experiments show that the fusion algorithm is reliable in a dynamic environment. © 2022 IEEE.
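
The first stage of the fusion algorithm, the initial Dijkstra path, can be sketched generically. This is textbook Dijkstra over an adjacency-list graph, not the paper's implementation; in the paper this initial path would then be refined by the improved ACO stage.

```python
import heapq

# Shortest path by Dijkstra's algorithm over a weighted graph given as
# {node: [(neighbour, cost), ...]}; returns (path, total cost).

def dijkstra(graph, start, goal):
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, already relaxed via a shorter route
        for nxt, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(queue, (nd, nxt))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```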

5.
ACM International Conference Proceeding Series ; 2022.
Article in English | Scopus | ID: covidwho-20243125

ABSTRACT

Facial expression recognition (FER) algorithms work well in constrained environments with little or no occlusion of the face. However, real-world face occlusion is prevalent, most notably with the need to use a face mask in the current Covid-19 scenario. While there are works on the problem of occlusion in FER, little has been done before on the particular face mask scenario. Moreover, the few works in this area largely use synthetically created masked FER datasets. Motivated by these challenges posed by the pandemic to FER, we present a novel dataset, the Masked Student Dataset of Expressions or MSD-E, consisting of 1,960 real-world non-masked and masked facial expression images collected from 142 individuals. Along with the issue of obfuscated facial features, we illustrate how other subtler issues in masked FER are represented in our dataset. We then provide baseline results using ResNet-18, finding that its performance dips in the non-masked case when trained for FER in the presence of masks. To tackle this, we test two training paradigms: contrastive learning and knowledge distillation, and find that they increase the model's performance in the masked scenario while maintaining its non-masked performance. We further visualise our results using t-SNE plots and Grad-CAM, demonstrating that these paradigms capitalise on the limited features available in the masked scenario. Finally, we benchmark SOTA methods on MSD-E. The dataset is available at https://github.com/SridharSola/MSD-E. © 2022 ACM.
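
One of the two training paradigms tested above, knowledge distillation, centres on a temperature-softened matching loss. The sketch below follows the standard Hinton-style formulation (KL divergence between softened teacher and student distributions, scaled by T squared); the temperature value and loss weighting are assumptions, not taken from the paper.

```python
import math

# Knowledge-distillation objective: the student is trained to match the
# teacher's temperature-softened class probabilities.

def softmax(logits, temperature=1.0):
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    p = softmax(teacher_logits, temperature)   # soft teacher targets
    q = softmax(student_logits, temperature)   # student predictions
    # KL(p || q), scaled by T^2 as in the standard formulation
    return temperature ** 2 * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

In practice this term is combined with the ordinary cross-entropy on the ground-truth labels.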

6.
CEUR Workshop Proceedings ; 3382, 2022.
Article in English | Scopus | ID: covidwho-20242636

ABSTRACT

The coronavirus disease 2019 pandemic exposed weaknesses and threats in various fields of human activity. In turn, the World Health Organization recommended various preventive measures to decrease the spread of the coronavirus; nonetheless, the world community ought to be ready for worldwide pandemics in the near future. One of the most effective ways to prevent the spread of the virus is still wearing a face mask, which has required staff to verify that visitors to public areas wear them. The aim of this paper was to remotely identify whether persons were wearing masks and to inform personnel of the status as soon as possible through Message Queuing Telemetry Transport (MQTT), using the edge computing paradigm. To solve this problem, we proposed using a Raspberry Pi with a camera as an edge device, together with the TensorFlow framework for pre-processing data at the edge. The system is designed so that it could be deployed at the entrances of public areas. Experimental results have shown that the proposed approach was able to optimize network traffic and detect persons without masks. This study can be applied to various closed and public areas for monitoring. © 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
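
The "inform personnel over MQTT" step amounts to publishing a small structured message from the edge device. The sketch below only builds a plausible topic string and JSON payload; the topic name and payload fields are assumptions for illustration, not taken from the paper, and the actual network publish (e.g. via an MQTT client library) is omitted.

```python
import json
import time

# Construct the alert message an edge device might publish when a
# visitor's mask status is determined. Topic and field names are
# hypothetical conventions, not the paper's.

def make_mask_alert(camera_id, masked, confidence, ts=None):
    payload = {
        "camera": camera_id,
        "masked": masked,
        "confidence": round(confidence, 3),
        "timestamp": ts if ts is not None else int(time.time()),
    }
    topic = f"entrance/{camera_id}/mask_status"
    return topic, json.dumps(payload, sort_keys=True)
```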

7.
AIP Conference Proceedings ; 2779, 2023.
Article in English | Scopus | ID: covidwho-20241125

ABSTRACT

Taxonomy is the science of naming and classifying all living as well as extinct organisms of the world; the Swedish botanist Carolus Linnaeus was the father of taxonomy. Of the 17,000 plant species present in India, more than 7,600 are medicinal plants. Indigenous Indian medicines are formulations of traditional knowledge and medicinal plant extracts. This traditional knowledge, transferred from one generation to the next, underpins drugs for various diseases: rather than relying on defined ingredients and proportions, these plant-extract drugs are based on traditional knowledge. The World Health Organization (WHO), the leading agency in health care, found that 80% of the population in low-income countries depends on traditional medicine for essential health care [1]. In the current pandemic era, medicinal plant species such as Citrus spp., Allium sativum, and Allium cepa were found effective in the management of COVID-19. As per WHO guidelines, medicinal research that uses clinical trials for new drug discovery needs a continuous supply of authenticated products that are correctly identified, classified, and verified [1]. Traditional identification and classification methods are not quick, efficient, or reliable. Automated classification of medicinal plants helps conserve knowledge of medicinal plant species, share it across generations, and improve society's knowledge about medicinal plants. This paper presents traditional and recent trends in using computer vision and machine learning to classify medicinal plant species, with the main focus on leaf images as input. It presents the challenges as well as the opportunities in identifying and classifying medicinal plant species through a comprehensive review of traditional methodologies. © 2023 Author(s).

8.
2022 IEEE 14th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management, HNICEM 2022 ; 2022.
Article in English | Scopus | ID: covidwho-20240818

ABSTRACT

This study compared five image classification algorithms, namely VGG16, VGG19, AlexNet, DenseNet, and ConvNeXt, on their ability to detect and classify COVID-19-related cases from chest X-ray images. The algorithms were compared using performance metrics such as accuracy, F1 score, precision, recall, and MCC. Upon testing, the accuracy of each model was unsatisfactory for a medical application, ranging from 80.00% to 92.50%. As such, an ensemble learning-based image classification model made up of AlexNet and VGG19, called CovidXNet, was proposed to detect COVID-19 from chest X-ray images by discriminating between healthy and pneumonic lung images. CovidXNet achieved an accuracy of 97.00%, a significant improvement over the earlier results. Further studies may be conducted to increase the accuracy of identifying and classifying chest radiographs for COVID-19-related cases, since the current model may still produce false negatives, which would be detrimental to preventing the spread of the virus. © 2022 IEEE.
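
The ensembling idea behind CovidXNet can be illustrated with a soft-voting sketch: average the class probabilities of the two base models and take the argmax. The exact fusion rule and the class labels below are assumptions for illustration, not details from the paper.

```python
# Soft-voting ensemble of two classifiers' probability vectors.

def ensemble_predict(probs_a, probs_b, labels=("normal", "pneumonia", "covid19")):
    fused = [(pa + pb) / 2.0 for pa, pb in zip(probs_a, probs_b)]
    best = max(range(len(fused)), key=fused.__getitem__)  # argmax over fused scores
    return labels[best], fused
```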

9.
Proceedings of SPIE - The International Society for Optical Engineering ; 12599, 2023.
Article in English | Scopus | ID: covidwho-20238661

ABSTRACT

During the COVID-19 coronavirus epidemic, people usually wear masks to prevent the spread of the virus, which has become a major obstacle to face-based computer vision techniques such as face recognition and face detection, so a masked-face inpainting technique is desired. The distribution of face features is strongly correlated, but existing inpainting methods typically ignore the relationship between face feature distributions. To address this issue, in this paper we first show that the face image inpainting task can be seen as a distribution alignment between face features in damaged and valid regions, and that style transfer is a distribution alignment process. Based on this theory, we propose a novel face inpainting model that considers the probability distribution between face features, the Face Style Self-Transfer Network (FaST-Net). Through the proposed style self-transfer mechanism, FaST-Net can align the style distribution of features in the inpainting region with the style distribution of features in the valid region of a face. Ablation studies have validated the effectiveness of FaST-Net, and experimental results on two popular human face datasets (CelebA and VGGFace) exhibit its superior performance compared with existing state-of-the-art methods. © 2023 SPIE.
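
The distribution-alignment idea can be shown in one dimension: shift and scale the damaged-region features so their mean and standard deviation match the valid region, as in adaptive instance normalisation. This is a toy illustration of the style-alignment principle, not the FaST-Net architecture.

```python
import statistics

# Align first- and second-order statistics of damaged-region features
# with those of the valid region (1-D AdaIN-style illustration).

def align_style(damaged_feats, valid_feats):
    mu_d, sd_d = statistics.mean(damaged_feats), statistics.pstdev(damaged_feats)
    mu_v, sd_v = statistics.mean(valid_feats), statistics.pstdev(valid_feats)
    # Whiten against the damaged statistics, recolour with the valid ones.
    return [(f - mu_d) / sd_d * sd_v + mu_v for f in damaged_feats]
```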

10.
Proceedings - 2022 2nd International Symposium on Artificial Intelligence and its Application on Media, ISAIAM 2022 ; : 135-139, 2022.
Article in English | Scopus | ID: covidwho-20236902

ABSTRACT

Deep learning (DL) approaches to image segmentation have achieved state-of-the-art performance in recent years. In particular, the U-Net model has been used successfully for image segmentation. However, traditional U-Net methods extract features, aggregate remote information, and reconstruct images by stacking convolution, pooling, and upsampling blocks, which is very inefficient because of the stacked local operators. In this paper, we propose a multi-attentional U-Net equipped with non-local-block-based self-attention, channel attention, and spatial attention for image segmentation. These blocks can be inserted into U-Net to flexibly aggregate information at the plane and spatial scales. We evaluate the multi-attentional U-Net on three benchmark datasets: COVID-19 segmentation, skin cancer segmentation, and thyroid nodule segmentation. Results show that our proposed models achieve better performance with faster computation and fewer parameters, and that the multi-attention U-Net can improve medical image segmentation results. © 2022 IEEE.
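
Of the three attention types named above, channel attention is the simplest to sketch: pool each channel globally, squash the pooled value through a sigmoid gate, and rescale the channel. The sketch below illustrates that squeeze-and-excitation-style idea on plain lists; it is not the paper's block, which would also include learned weights around the gate.

```python
import math

# Minimal channel-attention illustration: global-average-pool each channel
# ("squeeze"), gate with a sigmoid ("excitation"), rescale the channel.

def channel_attention(feature_maps):
    """feature_maps: list of channels, each a flat list of activations."""
    gates = []
    for channel in feature_maps:
        pooled = sum(channel) / len(channel)           # squeeze
        gates.append(1.0 / (1.0 + math.exp(-pooled)))  # excitation (sigmoid)
    return [[v * g for v in ch] for ch, g in zip(feature_maps, gates)]
```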

11.
2022 IEEE 14th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management, HNICEM 2022 ; 2022.
Article in English | Scopus | ID: covidwho-20235764

ABSTRACT

Face masks have been widely used since the start of the COVID-19 pandemic. Facial detection and recognition technologies, such as the iPhone's Face ID, rely heavily on seeing the facial features that are now obscured by masks; currently, the only way to use Face ID with a mask on is with an Apple Watch as well. This paper therefore seeks an initial, reliable means of personal facial recognition while the user wears a face mask, without needing an Apple Watch; this may also be applicable to other security systems or measures. Using a Multi-Task Cascaded Convolutional Network (MTCNN), a type of neural network that identifies faces and facial landmarks, and FaceNet, a deep neural network that derives features from a picture of a face, the masked face of the user can be identified and, more importantly, recognized. MTCNN is used to detect masked faces and automatically crop them from the raw images. In the learning phase, the exposed facial features are emphasized while the masks themselves are excluded as a factor in recognition. Image data are acquired by taking multiple pictures of a certain individual's face and from online repositories of other people's faces, captured in various settings such as different lighting levels, facial and head angles, colors and designs of face masks, and the presence or absence of glasses. The goal is to recognize whether or not that individual appears in the image. The training accuracy is 99.966% and the test accuracy is 99.921%. © 2022 IEEE.
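
After FaceNet-style embedding, the verification step is typically a distance check: accept the identity if the probe embedding lies within a threshold of the enrolled embedding. The sketch below illustrates that final comparison only; the threshold value is an assumption, and FaceNet itself (which produces the embeddings) is not shown.

```python
import math

# Verify identity by Euclidean distance between two face embeddings.
# Returns (accepted, distance); the threshold is a hypothetical value.

def same_person(emb_a, emb_b, threshold=1.0):
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(emb_a, emb_b)))
    return dist < threshold, dist
```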

12.
2022 International Conference on Technology Innovations for Healthcare, ICTIH 2022 - Proceedings ; : 34-37, 2022.
Article in English | Scopus | ID: covidwho-20235379

ABSTRACT

Training a Convolutional Neural Network (CNN) is a difficult task, especially for deep architectures that estimate a large number of parameters, so advanced optimization algorithms should be used. Indeed, optimization is one of the most important steps in reducing the error between the ground truth and the model prediction, and many methods have been proposed to solve optimization problems. In general, regularization, more specifically non-smooth regularization, can be used to build sparse networks, which makes the optimization task difficult. The main aim is to develop a novel optimizer based on a Bayesian framework. Promising results are obtained when our optimizer is applied to the classification of COVID-19 images: the proposed approach reaches an accuracy rate of 94%, surpassing all competing optimizers, which do not exceed 86%, and standard deep learning optimizers, which reach 84%. © 2022 IEEE.

13.
4th International Conference on Electrical, Computer and Telecommunication Engineering, ICECTE 2022 ; 2022.
Article in English | Scopus | ID: covidwho-20234930

ABSTRACT

In recent years, much research has been done on object detection using various machine learning models, but comparatively little on detecting and tracking humans in particular. This study uses the YOLOv4 object detector to detect humans and uses the detections to maintain social distance. The YOLOv4 model is trained on only one class, 'Person', to improve the speed of detecting humans in real-time scenarios while retaining a satisfactory accuracy of 97% to 99%. The detections are then tracked to build a system that maintains social distance and alerts the authority if a breach is detected. The system can be applied at ticket counters, hospitals, offices, factories, etc., and can also be used to maintain social distance between students and teachers in the classroom for their safety. © 2022 IEEE.

14.
Proceedings of SPIE - The International Society for Optical Engineering ; 12462, 2023.
Article in English | Scopus | ID: covidwho-20234924

ABSTRACT

Non-contact diagnosis became a hot topic during COVID-19, and online consultation gained popularity. In this research, a deep learning-based autonomous limb evaluation system is developed for online consultation and remote rehabilitation training for people with physical limitations. Its main goal is to collect and analyze information about limb states: the patient can evaluate the limb state at home using the mobile app, and the doctor can view the data and connect with the patient via the web's chat module to offer diagnostic opinions. Deep learning is used for the start/end attitude determination model and OpenCV for the limb and hand evaluation model, with the results uploaded to the server. © The Authors. Published under a Creative Commons Attribution CC-BY 3.0 License.

15.
Proceedings - 2022 5th International Conference on Electronics and Electrical Engineering Technology, EEET 2022 ; : 1-8, 2022.
Article in English | Scopus | ID: covidwho-20232994

ABSTRACT

Contact tracing is one of the methods used by governments and organizations to control viral diseases like COVID-19, which claimed many human lives, and social distancing is advised to everyone to minimize the spread of the virus. This study aims to build a contact tracing tool that monitors social distancing individually using computer vision in real time. Tracking-by-detection is used for individual monitoring, with YOLOv4 (You Only Look Once) as the object detector and SORT (Simple Online and Realtime Tracking) as the object tracker. The combination achieved average streaming and detection frame rates of 26 FPS and 10 FPS, respectively, on an NVIDIA GTX 1650; higher frame rates are expected on more powerful devices. Moreover, the system obtained 98.2% accuracy in measuring the distance between individuals. Furthermore, the QR scanner used in the study attains a 100% scan success rate and 98% accuracy in allocating a QR code to its correct owner in the video stream. © 2022 IEEE.

16.
Proceedings of 2023 3rd International Conference on Innovative Practices in Technology and Management, ICIPTM 2023 ; 2023.
Article in English | Scopus | ID: covidwho-20232653

ABSTRACT

COVID-19 is a threat that came out of nowhere and shook the entire world, and various prediction techniques were invented in a very short time. This study likewise develops a Deep Learning (DL) model that can predict the presence of COVID-19 and pneumonia by analyzing X-ray images of human lungs. A collection of lung X-ray images is obtained from Kaggle and preprocessed using two alternative methods, including image enhancement and resizing. Two deep learning models, MobileNet and Inception-V3, are then trained on the preprocessed dataset, and the best model is selected by validation. As the epoch count increases during training and validation, the accuracy of both models increases and the loss decreases. The MobileNet model reaches a loss value of 0.32 at the fourteenth validation epoch; its accuracy is lower during the first few training epochs and approaches 0.9 by the fifteenth. The Inception-V3 model produces its lowest loss value, 0.1452, at the eleventh validation epoch, and its greatest accuracy, 0.9697, after the twelfth. The better-performing model with the lower loss is then put through a final test, and Inception-V3 is selected as the top method for COVID-19 detection. In the final test, Inception-V3 correctly predicted all of the normal and COVID-19 images; for pneumonia, it missed only one image out of 20, an error small enough to be disregarded. When a patient cannot afford to see a doctor for consultation, the DL model created in this work can be used as a preliminary test for COVID-19.
The study's findings can be extended by including the model created in this study as a backend processor for a website or software application. © 2023 IEEE.

17.
Pakistan Journal of Medical and Health Sciences ; 17(4):213-217, 2023.
Article in English | EMBASE | ID: covidwho-20232597

ABSTRACT

Aim: To determine the effect of COVID-19 on eyesight due to increased screen time in undergraduate medical students. Study design: Cross-sectional study. Place and duration of study: The survey was carried out from October 2022 to December 2022 at Army Medical College Rawalpindi. Questionnaires were filled in person, and an online platform was also used to distribute an e-questionnaire developed with Google Forms; participants were asked to share the e-questionnaire with their friends via Facebook and Messenger. Method(s): Participants were selected using non-probability consecutive sampling. College students aged 20-25 years were included, with a sample size of 400 based on an international study. Participants with comorbidities (cataract, glaucoma) were excluded, while those with trouble concentrating on things such as reading newspapers or books, or watching television, were included. Digital eye strain was assessed using the validated Computer Vision Syndrome Questionnaire (CVS-Q), which measures symptoms such as eye fatigue, headache, blurred vision, double vision, itching eyes, dryness, tears, eye redness and pain, excessive blinking, feeling of a foreign body, burning or irritation, difficulty focusing for near vision, feeling of sight worsening, and sensitivity to light. Qualitative data were analyzed using the chi-square test. Result(s): A total of 470 responses were recorded, of which 257 (54.7%) were from males and 213 (45.3%) from females. The most common symptom was headache, affecting 58.1% of the population before COVID-19 and increasing to 83.2% afterwards (p < 0.001). Other symptoms with p-values below 0.001 were blurred vision while using a digital device, irritated or burning eyes, dry eyes, and sensitivity to bright light.
Conclusion: The practical implication of the study is to create awareness among the general population about COVID-19, that eyesight is a bull's-eye target to be affected by it, and that simple preventive measures can be taken. The study highlights the excessive use of digital devices during the COVID-19 lockdown and its adverse effects on the ocular health of future health care workers. Copyright © 2023 Lahore Medical And Dental College. All rights reserved.

18.
Multimed Syst ; : 1-10, 2021 Apr 28.
Article in English | MEDLINE | ID: covidwho-20235865

ABSTRACT

The demand for automatic detection of the novel coronavirus, or COVID-19, is increasing across the globe. The exponential rise in cases burdens healthcare facilities, and a vast amount of multimedia healthcare data is being explored to find a solution. This study presents a practical solution for detecting COVID-19 from chest X-rays while distinguishing them from normal lungs and lungs impacted by viral pneumonia using deep Convolutional Neural Networks (CNN). Three pre-trained CNN models (EfficientNetB0, VGG16, and InceptionV3) are evaluated through transfer learning; these models were selected for their balance of accuracy and efficiency with fewer parameters, making them suitable for mobile applications. The dataset used for the study is publicly available and compiled from different sources. The study uses deep learning techniques and performance metrics (accuracy, recall, specificity, precision, and F1 scores). The results show that the proposed approach produced a high-quality model with an overall accuracy of 92.93% and a COVID-19 sensitivity of 94.79%. The work indicates a definite possibility of implementing computer vision designs to enable effective detection and screening measures.

19.
Multimed Syst ; : 1-15, 2021 Jul 28.
Article in English | MEDLINE | ID: covidwho-20232941

ABSTRACT

Unmanned Air Vehicles (UAVs) are becoming popular in real-world scenarios due to advances in sensor technology and hardware platform development. The applications of UAVs in the medical field are broad and may be shared worldwide. With the outbreak of COVID-19, fast diagnostic testing has become a challenge due to the lack of test kits; UAVs can help tackle COVID-19 by delivering medication to hospitals on time. In this paper, to detect the number of COVID-19 cases in a hospital, we propose a deep convolutional neural architecture using transfer learning that classifies patients into three categories, COVID-19 (positive), normal (negative), and pneumonia, based on given X-ray images. The proposed deep-learning architecture is compared with state-of-the-art models, and the results show that it provides an accuracy of 94.92%. Further, to offer time-bounded services to COVID-19 patients, we propose a scheme for delivering emergency kits to hospitals in need using an optimal path planning approach for UAVs in the network.

20.
Multimed Tools Appl ; : 1-32, 2023 May 26.
Article in English | MEDLINE | ID: covidwho-20244166

ABSTRACT

Multimedia data plays an important role in medicine and healthcare since EHR (Electronic Health Records) entail complex images and videos for analyzing patient data. In this article, we hypothesize that transfer learning with computer vision can be adequately harnessed on such data, more specifically chest X-rays, to learn from a few images for assisting accurate, efficient recognition of COVID. While researchers have analyzed medical data (including COVID data) using computer vision models, the main contributions of our study entail the following. Firstly, we conduct transfer learning using a few images from publicly available big data on chest X-rays, suitably adapting computer vision models with data augmentation. Secondly, we aim to find the best fit models to solve this problem, adjusting the number of samples for training and validation to obtain the minimum number of samples with maximum accuracy. Thirdly, our results indicate that combining chest radiography with transfer learning has the potential to improve the accuracy and timeliness of radiological interpretations of COVID in a cost-effective manner. Finally, we outline applications of this work during COVID and its recovery phases with future issues for research and development. This research exemplifies the use of multimedia technology and machine learning in healthcare.
